
    Bounds of restricted isometry constants in extreme asymptotics: formulae for Gaussian matrices

    Restricted Isometry Constants (RICs) provide a measure of how far from an isometry a matrix can be when acting on sparse vectors. This, and related quantities, provide a mechanism by which standard eigen-analysis can be applied to topics relying on sparsity. RIC bounds have been presented for a variety of random matrices and matrix dimension and sparsity ranges. We provide explicit formulae for RIC bounds of n × N Gaussian matrices with sparsity k in three settings: a) n/N fixed and k/n approaching zero, b) k/n fixed and n/N approaching zero, and c) n/N approaching zero with k/n decaying inverse logarithmically in N/n. In these three settings the RICs a) decay to zero, b) become unbounded (or approach inherent bounds), and c) approach a non-zero constant. Implications of these results for RIC-based analysis of compressed sensing algorithms are presented.
    Comment: 40 pages, 5 figures
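    As a rough illustration of the quantity whose limits the paper characterizes (a hypothetical Monte Carlo sketch, not the paper's closed-form bounds; the N(0, 1/n) normalization of the Gaussian entries is an assumption), the following probes setting a): with n/N fixed, the sampled RIC deviations shrink as k/n decreases. Random supports visit only a vanishing fraction of the C(N, k) submatrices, so these are lower estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_ric(n, N, k, trials=2000):
    """Monte Carlo lower estimates of the upper/lower RICs of an
    n x N Gaussian matrix with N(0, 1/n) entries at sparsity k.
    Random supports only visit a tiny fraction of the C(N, k)
    submatrices, so the true RICs can only be larger."""
    A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))
    upper = lower = 0.0
    for _ in range(trials):
        cols = rng.choice(N, size=k, replace=False)
        s = np.linalg.svd(A[:, cols], compute_uv=False)
        upper = max(upper, s[0] ** 2 - 1.0)   # largest sigma^2 above 1
        lower = max(lower, 1.0 - s[-1] ** 2)  # smallest sigma^2 below 1
    return upper, lower

# Setting a): n/N fixed at 1/4 while k/n -> 0; both deviations shrink.
for k in (16, 8, 4, 2):
    U, L = sampled_ric(n=256, N=1024, k=k)
    print(f"k={k:2d}: upper >= {U:.3f}, lower >= {L:.3f}")
```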

    Vanishingly Sparse Matrices and Expander Graphs, With Application to Compressed Sensing

    We revisit the probabilistic construction of sparse random matrices where each column has a fixed number of nonzeros whose row indices are drawn uniformly at random with replacement. These matrices have a one-to-one correspondence with the adjacency matrices of fixed left-degree expander graphs. We present formulae for the expected cardinality of the set of neighbors of these graphs, and present tail bounds on the probability that this cardinality will be less than the expected value. From these bounds one can deduce similar bounds for the expansion of the graph, which is of interest in many applications. These bounds are derived through a more detailed analysis of collisions in unions of sets. Key to this analysis is a novel dyadic splitting technique. The analysis leads to better order constants that allow for quantitative theorems on the existence of lossless expander graphs, and hence of the sparse random matrices we consider, as well as quantitative compressed sensing sampling theorems when using sparse non-mean-zero measurement matrices.
    Comment: 17 pages, 12 PostScript figures
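    A minimal sketch of the construction just described, with illustrative dimensions: each column receives d row indices drawn uniformly at random with replacement, so collisions can make the neighbourhood of a set of columns smaller than the ideal d·|S|, which is exactly what the tail bounds above control:

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_expander_matrix(n, N, d):
    """Each of the N columns gets d row indices drawn uniformly at
    random WITH replacement, matching the construction above; a
    collision leaves a column with fewer than d distinct nonzeros."""
    A = np.zeros((n, N), dtype=np.int8)
    for col in range(N):
        A[rng.integers(0, n, size=d), col] = 1
    return A

def neighbourhood_size(A, support):
    """|N(S)|: number of rows touched by the columns indexed by support."""
    return int(np.count_nonzero(A[:, support].any(axis=1)))

A = sparse_expander_matrix(n=200, N=1000, d=8)
S = rng.choice(1000, size=10, replace=False)
print("ideal d*|S| =", 8 * len(S),
      "| observed |N(S)| =", neighbourhood_size(A, S))
```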

    On the construction of sparse matrices from expander graphs

    We revisit the asymptotic analysis of the probabilistic construction of adjacency matrices of expander graphs proposed in [4]. With better bounds we derive a reduced sample complexity for the number of nonzeros per column of these matrices: precisely $d = \mathcal{O}(\log_s(N/s))$, as opposed to the standard $d = \mathcal{O}(\log(N/s))$. This gives insight into why small $d$ performed well in numerical experiments involving such matrices. Furthermore, we derive quantitative sampling theorems for our constructions which show our construction outperforming the existing state of the art. We also use our results to compare the performance of sparse recovery algorithms where these matrices are used for linear sketching.
    Comment: 28 pages, 4 figures

    On the Construction of Sparse Matrices From Expander Graphs

    We revisit the asymptotic analysis of the probabilistic construction of adjacency matrices of expander graphs proposed in Bah and Tanner [1]. With better bounds we derive a reduced sample complexity for d, the number of non-zeros per column of these matrices (or equivalently the left degree of the underlying expander graph): precisely $d = \mathcal{O}(\log_s(N/s))$, as opposed to the standard $d = \mathcal{O}(\log(N/s))$, where N is the number of columns of the matrix (also the cardinality of the set of left vertices of the expander graph), or equivalently the ambient dimension of the signals that can be sensed by such matrices, and s is the sparsity of those signals. This gives insight into why sensing matrices with small d perform well in numerical compressed sensing experiments. Furthermore, we derive quantitative sampling theorems for our constructions which show our construction outperforming the existing state of the art. We also use our results to compare the performance of sparse recovery algorithms where these matrices are used for linear sketching.
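    To get a feel for the size of this reduction, the comparison below evaluates only the logarithmic factors (the constants hidden by the $\mathcal{O}(\cdot)$ are ignored, so the numbers are indicative rather than actual degrees):

```python
import math

# Indicative comparison of the logarithmic factors in the two bounds:
# d = O(log_s(N/s)) (this construction) vs d = O(log(N/s)) (standard).
# The constants hidden by O(.) are ignored.
for N, s in [(10**6, 10**2), (10**6, 10**3), (10**8, 10**4)]:
    standard = math.log(N / s)      # natural log, as in O(log(N/s))
    reduced = math.log(N / s, s)    # base-s log, i.e. log_s(N/s)
    print(f"N={N:.0e}, s={s:.0e}: log(N/s) = {standard:5.1f}, "
          f"log_s(N/s) = {reduced:4.2f}")
```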

    Improved Bounds on Restricted Isometry Constants for Gaussian Matrices

    The Restricted Isometry Constants (RIC) of a matrix $A$ measure how close to an isometry the action of $A$ is on vectors with few nonzero entries, measured in the $\ell^2$ norm. Specifically, the upper and lower RIC of a matrix $A$ of size $n \times N$ are the maximum and the minimum deviation from unity (one) of the largest and smallest, respectively, squared singular values of all $\binom{N}{k}$ matrices formed by taking $k$ columns from $A$. Calculation of the RIC is intractable for most matrices due to its combinatorial nature; however, many random matrices typically have bounded RIC in some range of problem sizes $(k, n, N)$. We provide the best known bound on the RIC for Gaussian matrices, which is also the smallest known bound on the RIC for any large rectangular matrix. Improvements over prior bounds are achieved by exploiting the similarity of singular values for matrices which share a substantial number of columns.
    Comment: 16 pages, 8 figures
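    The combinatorial definition above can be instantiated directly at toy sizes. The sketch below (illustrative code, not from the paper; the N(0, 1/n) normalization of the Gaussian entries is an assumption) enumerates all $\binom{N}{k}$ column subsets, which is exactly the cost the abstract calls intractable at realistic sizes:

```python
import itertools
import math

import numpy as np

rng = np.random.default_rng(2)

def exact_ric(A, k):
    """Exact upper/lower RIC of A at sparsity k, by enumerating all
    C(N, k) column subsets -- feasible only for toy sizes."""
    n, N = A.shape
    upper, lower = 0.0, 0.0
    for cols in itertools.combinations(range(N), k):
        s = np.linalg.svd(A[:, list(cols)], compute_uv=False)
        upper = max(upper, s[0] ** 2 - 1.0)   # largest sigma^2 over 1
        lower = max(lower, 1.0 - s[-1] ** 2)  # smallest sigma^2 under 1
    return upper, lower

n, N, k = 12, 18, 3  # C(18, 3) = 816 subsets: small enough to enumerate
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))
U, L = exact_ric(A, k)
print(f"upper RIC = {U:.3f}, lower RIC = {L:.3f} "
      f"over {math.comb(N, k)} subsets")
```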

    Restricted isometry constants in compressed sensing

    Compressed Sensing (CS) is a framework in which we measure data through a non-adaptive linear mapping with far fewer measurements than the ambient dimension of the data. This is made possible by exploiting the inherent structure (simplicity) of the data being measured. The central issues in this framework are the design and analysis of the measurement operator (matrix) and of recovery algorithms. Restricted isometry constants (RIC) of the measurement matrix are the most widely used tool for the analysis of CS recovery algorithms. The subscripts 1 and 2 below reflect the two RIC variants developed in the CS literature; they refer to the ℓ1-norm and ℓ2-norm respectively.

    The RIC2 of a matrix A measures how close to an isometry the action of A is on vectors with few nonzero entries, measured in the ℓ2-norm. This, and related quantities, provide a mechanism by which standard eigen-analysis can be applied to topics relying on sparsity. Specifically, the upper and lower RIC2 of a matrix A of size n × N are the maximum and the minimum deviation from unity (one) of the largest and smallest, respectively, squared singular values of all $\binom{N}{k}$ matrices formed by taking k columns from A. Calculation of the RIC2 is intractable for most matrices due to its combinatorial nature; however, many random matrices typically have bounded RIC2 in some range of problem sizes (k, n, N). We provide the best known bound on the RIC2 for Gaussian matrices, which is also the smallest known bound on the RIC2 for any large rectangular matrix. Our results build on the prior bounds of Blanchard, Cartis, and Tanner in "Compressed Sensing: How sharp is the Restricted Isometry Property?", with improvements achieved by grouping submatrices that share a substantial number of columns.

    RIC2 bounds have been presented for a variety of random matrices, matrix dimensions and sparsity ranges. We provide explicit formulae for RIC2 bounds of n × N Gaussian matrices with sparsity k in three settings: a) n/N fixed and k/n approaching zero, b) k/n fixed and n/N approaching zero, and c) n/N approaching zero with k/n decaying inverse logarithmically in N/n. In these three settings the RICs a) decay to zero, b) become unbounded (or approach inherent bounds), and c) approach a non-zero constant. Implications of these results for RIC2-based analysis of CS algorithms are presented.

    The RIC2 of sparse mean-zero random matrices can be bounded using concentration bounds for Gaussian matrices. However, this RIC2 approach does not capture the benefits of the sparse matrices, and in so doing gives pessimistic bounds. RIC1 is a variant of RIC2 in which the nearness to an isometry is measured in the ℓ1-norm; it is both better able to capture the structure of sparse matrices and allows for the analysis of non-mean-zero matrices. We consider a probabilistic construction of sparse random matrices where each column has a fixed number of non-zeros whose row indices are drawn uniformly at random. These matrices have a one-to-one correspondence with the adjacency matrices of fixed left-degree expander graphs. We present formulae for the expected cardinality of the set of neighbours of these graphs, and present a tail bound on the probability that this cardinality will be less than the expected value. From this bound one can deduce a similar bound for the expansion of the graph, which is of interest in many applications. These bounds are derived through a more detailed analysis of collisions in unions of sets using a dyadic splitting technique. This bound allows for quantitative sampling theorems on the existence of expander graphs and of the sparse random matrices we consider, as well as quantitative CS sampling theorems when using sparse non-mean-zero measurement matrices.
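    As a toy illustration of the ℓ1 viewpoint (illustrative code, not the thesis's analysis): for adjacency matrices of (k, d, ε)-expanders, the expander-based CS literature gives roughly (1 − 2ε)·d·‖x‖₁ ≤ ‖Ax‖₁ ≤ d·‖x‖₁ for all k-sparse x, with sign cancellations on shared rows driving the lower deviation. The sketch below probes that ratio empirically for the construction described above:

```python
import numpy as np

rng = np.random.default_rng(3)

def ric1_probe(A, d, k, trials=5000):
    """Empirical probe of the lower ell_1 deviation: samples k-sparse
    signed x and tracks the worst ratio ||Ax||_1 / (d * ||x||_1),
    which expander-based bounds keep above roughly 1 - 2*eps."""
    n, N = A.shape
    worst = 1.0
    for _ in range(trials):
        x = np.zeros(N)
        support = rng.choice(N, size=k, replace=False)
        x[support] = rng.choice([-1.0, 1.0], size=k) * rng.random(k)
        worst = min(worst, np.abs(A @ x).sum() / (d * np.abs(x).sum()))
    return worst

# Same construction as before: d row indices per column, drawn
# uniformly at random (collisions possible).
n, N, d, k = 200, 1000, 8, 10
A = np.zeros((n, N))
for col in range(N):
    A[rng.integers(0, n, size=d), col] = 1.0
print("min ||Ax||_1 / (d ||x||_1) over sampled x:",
      round(ric1_probe(A, d, k), 3))
```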